Hierarchical Neural Architecture Search for Deep Stereo Matching - Supplementary Materials
KITTI 2012 contains 194 training image pairs and 195 test image pairs; we use a maximum disparity level of 192 on this dataset. Middlebury, by contrast, consists mostly of indoor scenes with handcrafted layouts, and it contains many thin objects and large disparity ranges. We provide further qualitative results on the SceneFlow, KITTI 2012, KITTI 2015 and Middlebury datasets in Figures 1, 2, 3 and 4, respectively.
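To illustrate how a maximum disparity level such as 192 enters classical stereo matching, here is a minimal NumPy sketch (our own illustration, not the paper's network): it builds a sum-of-absolute-differences cost volume over the candidate disparities and takes a winner-take-all pick per pixel.

```python
import numpy as np

def sad_cost_volume(left, right, max_disp=192):
    """Build a sum-of-absolute-differences cost volume.

    left, right: 2-D grayscale arrays of equal shape (H, W).
    Returns a (max_disp, H, W) array where slice d holds the
    per-pixel cost of matching left[x] against right[x - d].
    """
    h, w = left.shape
    volume = np.full((max_disp, h, w), np.inf)
    for d in range(max_disp):
        # Shift the right image d pixels; columns < d have no match.
        volume[d, :, d:] = np.abs(left[:, d:] - right[:, : w - d])
    return volume

def winner_take_all(volume):
    # Pick the disparity with minimal cost at each pixel.
    return volume.argmin(axis=0)
```

Deep stereo networks replace the hand-crafted SAD cost with learned features and the winner-take-all step with differentiable aggregation, but the disparity cap bounds the volume's depth in the same way.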
Matching neural paths: transfer from recognition to correspondence search
Nikolay Savinov, Lubor Ladicky, Marc Pollefeys
Many machine learning tasks require finding per-part correspondences between objects. In this work we focus on low-level correspondences -- a highly ambiguous matching problem. We propose to use a hierarchical semantic representation of the objects, coming from a convolutional neural network, to solve this ambiguity. Training it for low-level correspondence prediction directly might not be an option in some domains where the ground-truth correspondences are hard to obtain. We show how transfer from recognition can be used to avoid such training. Our idea is to mark parts as "matching" if their features are close to each other at all the levels of convolutional feature hierarchy (neural paths). Although the overall number of such paths is exponential in the number of layers, we propose a polynomial algorithm for aggregating all of them in a single backward pass. The empirical validation is done on the task of stereo correspondence and demonstrates that we achieve competitive results among the methods which do not use labeled target domain data.
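The polynomial aggregation the abstract alludes to can be illustrated with a toy dynamic program: rather than enumerating every root-to-leaf path through the feature hierarchy (exponential in depth), propagate per-unit path counts level by level, so a path contributes only if every unit on it is marked "matching". The code below is our own simplified sketch over a generic layered graph, not the authors' algorithm.

```python
import numpy as np

def count_matching_paths(match, parents):
    """Count root-to-leaf paths whose every node is marked 'matching'.

    match:   list of 1-D 0/1 arrays; match[l][i] = 1 if unit i at
             level l counts as a match.
    parents: list over levels 1..L-1; parents[l-1][i] lists the
             indices at level l-1 that feed unit i at level l.
    Naive path enumeration is exponential in depth; this dynamic
    program runs in time linear in the number of edges.
    """
    # paths[i] = number of all-matching paths ending at unit i.
    paths = np.asarray(match[0], dtype=np.int64).copy()
    for l in range(1, len(match)):
        nxt = np.zeros(len(match[l]), dtype=np.int64)
        for i, ps in enumerate(parents[l - 1]):
            if match[l][i]:
                nxt[i] = sum(paths[p] for p in ps)
        paths = nxt
    return int(paths.sum())
```

The same recurrence works with real-valued per-level similarities in place of 0/1 indicators, replacing the count with a sum of path products.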
TiCoSS: Tightening the Coupling between Semantic Segmentation and Stereo Matching within A Joint Learning Framework
Guanfeng Tang, Zhiyuan Wu, Jiahang Li, Ping Zhong, Xieyuanli Chen, Huiming Lu, Rui Fan
Semantic segmentation and stereo matching, respectively analogous to the ventral and dorsal streams in the human brain, are two key components of autonomous driving perception systems. Addressing these two tasks with separate networks is no longer the mainstream direction in developing computer vision algorithms, particularly with the recent advances in large vision models and embodied artificial intelligence. The trend is shifting towards combining them within a joint learning framework, with particular emphasis on feature sharing between the two tasks. The major contributions of this study lie in comprehensively tightening the coupling between semantic segmentation and stereo matching. Specifically, this study introduces three novelties: (1) a tightly coupled, gated feature fusion strategy, (2) a hierarchical deep supervision strategy, and (3) a coupling tightening loss function. The combined use of these technical contributions results in TiCoSS, a state-of-the-art joint learning framework that simultaneously tackles semantic segmentation and stereo matching. Through extensive experiments on the KITTI and vKITTI2 datasets, along with qualitative and quantitative analyses, we validate the effectiveness of our developed strategies and loss function, and demonstrate the framework's superior performance compared to prior art, with a notable increase in mIoU by over 9%. Our source code will be publicly available at mias.group/TiCoSS upon publication.
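A gated feature-fusion step of the kind described in contribution (1) can be sketched as follows; the shapes, names, and sigmoid gating here are illustrative assumptions on our part, not TiCoSS's actual architecture.

```python
import numpy as np

def gated_fusion(seg_feat, stereo_feat, w_gate, b_gate):
    """Toy gated fusion of two task-specific feature maps.

    A sigmoid gate, computed from the concatenated features, decides
    per element how much each branch contributes to the shared
    representation.  seg_feat, stereo_feat: (H, W, C) arrays;
    w_gate: (2C, C); b_gate: (C,).
    """
    concat = np.concatenate([seg_feat, stereo_feat], axis=-1)
    gate = 1.0 / (1.0 + np.exp(-(concat @ w_gate + b_gate)))  # sigmoid
    # Convex combination: gate -> 1 favours the segmentation branch.
    return gate * seg_feat + (1.0 - gate) * stereo_feat
```

Learning `w_gate` and `b_gate` end-to-end lets the network decide, per location and channel, which task's features dominate the shared representation, rather than fusing with a fixed sum or concatenation.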